Collaborating Authors

Alison Gopnik


Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?

GX-Chen, Anthony, Lin, Dongyan, Samiei, Mandana, Precup, Doina, Richards, Blake A., Fergus, Rob, Marino, Kenneth

arXiv.org Artificial Intelligence

Language model (LM) agents are increasingly used as autonomous decision-makers which need to actively gather information to guide their decisions. A crucial cognitive skill for such agents is the efficient exploration and understanding of the causal structure of the world -- key to robust, scientifically grounded reasoning. Yet, it remains unclear whether LMs possess this capability or exhibit systematic biases leading to erroneous conclusions. In this work, we examine LMs' ability to explore and infer causal relationships, using the well-established Blicket Test paradigm from developmental psychology. We find that LMs reliably infer the common, intuitive disjunctive causal relationships but systematically struggle with the unusual, yet equally (or sometimes even more) evidenced conjunctive ones. This "disjunctive bias" persists across model families, sizes, and prompting strategies, and performance further declines as task complexity increases. Interestingly, an analogous bias appears in human adults, suggesting that LMs may have inherited deep-seated reasoning heuristics from their training data. To this end, we quantify similarities between LMs and humans, finding that LMs exhibit adult-like inference profiles (but not child-like). Finally, we propose a test-time sampling method which explicitly samples and eliminates hypotheses about causal relationships from the LM. This scalable approach significantly reduces the disjunctive bias and moves LMs closer to the goal of scientific, causally rigorous reasoning.
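The sample-and-eliminate idea described in the abstract can be illustrated with a toy Blicket Test setup. This is an illustrative sketch only, not the authors' implementation: the object names, the hypothesis space, and the two activation rules (disjunctive vs. conjunctive) are assumptions chosen to mirror the paradigm the abstract describes.

```python
from itertools import combinations

# Objects placed on the blicket machine; each observation is
# (subset_placed, machine_lit).
objects = {"A", "B", "C"}
observations = [
    ({"A"}, False),      # A alone does not light the machine
    ({"B"}, False),      # B alone does not light it either
    ({"A", "B"}, True),  # A and B together do: conjunctive evidence
]

def predicts(blickets, rule, placed):
    """Predict whether the machine lights up under a hypothesis."""
    if rule == "disjunctive":
        return bool(blickets & placed)          # any blicket suffices
    return bool(blickets) and blickets <= placed  # all blickets required

# Hypothesis space: every nonempty subset of objects, under either rule.
hypotheses = [
    (frozenset(s), rule)
    for r in range(1, len(objects) + 1)
    for s in combinations(sorted(objects), r)
    for rule in ("disjunctive", "conjunctive")
]

# Eliminate every hypothesis inconsistent with any observation.
surviving = [
    (b, rule) for b, rule in hypotheses
    if all(predicts(b, rule, placed) == lit for placed, lit in observations)
]
```

With this evidence, every disjunctive hypothesis is eliminated (A or B alone would have lit the machine), and only the conjunctive hypothesis that A and B are both blickets survives: exactly the unusual-but-well-evidenced structure the paper reports LMs struggle to infer.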


Imitation & Innovation In AI - FoundersList

#artificialintelligence

Speaker: Alison Gopnik, Distinguished Professor of Psychology, UC Berkeley

Talk Title: Imitation & Innovation in AI: What Four-year-olds Can Do & AI Can't (Yet)

About Talk: Young children's learning may be an important model for artificial intelligence (AI). Comparing children & artificial agents in the same tasks & environments can help us understand the abilities of existing systems & create new ones. In particular, many current large data-supervised systems, such as large language models (LLMs), provide new ways to access information collected by past agents. However, they lack the kinds of exploration & innovation that are characteristic of children. New techniques may help to instantiate child-like curiosity, exploration & play in AI systems.


Everyday AI podcast series

AIHub

In a new podcast series, Everyday AI, host Jon Whittle (CSIRO) explores the AI that is already shaping our lives. With the help of expert guests, he looks at how AI is used in creative industries, health, conservation, sports and space.

Episode 4: AI and citizen science – AI in ecology. This episode features Jessie Barry from Cornell University's Macaulay Library and Merlin Bird ID, ichthyologist Mark McGrouther, and Google's Megha Malpani.

Episode 6: The final frontier – AI in space. This episode features astrophysicist Kirsten Banks, NASA researcher Dr Raymond Francis, and research astronomer Dr Ivy Wong.


ACRE: Abstract Causal REasoning Beyond Covariation

Zhang, Chi, Jia, Baoxiong, Edmonds, Mark, Zhu, Song-Chun, Zhu, Yixin

arXiv.org Artificial Intelligence

Causal induction, i.e., identifying unobservable mechanisms that lead to the observable relations among variables, has played a pivotal role in modern scientific discovery, especially in scenarios with only sparse and limited data. Humans, even young toddlers, induce causal relationships surprisingly well in various settings despite the task's notorious difficulty. In contrast to this commonplace trait of human cognition, however, modern Artificial Intelligence (AI) systems lack a diagnostic benchmark for measuring causal induction. In this work, we therefore introduce the Abstract Causal REasoning (ACRE) dataset for systematic evaluation of current vision systems on causal induction. Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions, in either an independent or an interventional scenario: direct, indirect, screening-off, and backward-blocking, intentionally going beyond the simple strategy of inducing causal relationships by covariation. By analyzing visual reasoning architectures on this testbed, we find that pure neural models tend toward an associative strategy, with chance-level performance, whereas neuro-symbolic combinations struggle with backward-blocking reasoning. These deficiencies call for future research on models with a more comprehensive capability for causal induction.
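The backward-blocking question type mentioned in the abstract can be sketched with a tiny Bayesian update over blicket hypotheses. This is a toy illustration under assumed parameters (a disjunctive activation rule and a hypothetical 1/3 prior per object), not ACRE itself, which is a visual benchmark: after seeing A and B light the machine together, B looks likely to be a blicket, but subsequently seeing A alone light it "blocks" that inference and returns B to its prior.

```python
from itertools import product

P_BLICKET = 1 / 3  # hypothetical prior that each object is a blicket
objects = ["A", "B"]

def prior(assign):
    """Independent prior over a world: assign maps object -> is_blicket."""
    p = 1.0
    for is_blicket in assign.values():
        p *= P_BLICKET if is_blicket else 1 - P_BLICKET
    return p

def lights(assign, placed):
    """Assumed disjunctive rule: any placed blicket activates the machine."""
    return any(assign[o] for o in placed)

def posterior_b(observations):
    """P(B is a blicket | observations) by enumerating all worlds."""
    num = den = 0.0
    for bits in product([False, True], repeat=len(objects)):
        assign = dict(zip(objects, bits))
        # Worlds inconsistent with any observation get zero likelihood.
        if all(lights(assign, placed) == lit for placed, lit in observations):
            p = prior(assign)
            den += p
            if assign["B"]:
                num += p
    return num / den

after_ab = posterior_b([({"A", "B"}, True)])                    # 0.6
after_ab_then_a = posterior_b([({"A", "B"}, True), ({"A"}, True)])  # 1/3
```

Seeing A and B succeed together raises P(B is a blicket) from 1/3 to 0.6; then seeing A succeed alone explains the first observation away and drops B back to its 1/3 prior, which is the retrospective revision that backward-blocking questions probe.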


Want your child to be a great thinker? Let them play more!

#artificialintelligence

A plain playground can seem more attractive to a child than a classroom beautifully chiseled and crafted over hundreds of man-hours. Imagine if classrooms were like playgrounds; learning would be at its highest. This may not appeal to educators and parents, given the constant pressure of academic readiness, but this 'attractiveness' is supported by volumes of research: play is vital for scientific inquiry and conceptual thinking. Play is fun and voluntary.


Sydney Ideas - When (and why) children are smarter than adults, and AI too

#artificialintelligence

Young children are actually better at learning unusual or unlikely principles than adults. Professor Alison Gopnik's research relates this pattern to computational ideas about search and sampling, evolutionary ideas about human life history, and neuroscience findings about plasticity. Alison's hypothesis is that the evolution of our distinctively long, protected human childhood allows an early period of broad hypothesis search, exploration and creativity before the demands of goal-directed action set in. Alison Gopnik is a professor of psychology and affiliate professor of philosophy at the University of California at Berkeley. She received her BA from McGill University and her PhD from Oxford University.


Possible Minds: 25 Ways of Looking at AI

#artificialintelligence

John Brockman: On the Promise and Peril of AI • Seth Lloyd: Wrong, but More Relevant Than Ever • Judea Pearl: The Limitations of Opaque Learning Machines • Stuart Russell: The Purpose Put Into the Machine • George Dyson: The Third Law • Daniel C. Dennett: What Can We Do? • Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into • Frank Wilczek: The Unity of Intelligence • Max Tegmark: Let's Aspire to More Than Making Ourselves Obsolete • Jaan Tallinn: Dissident Messages • Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas • David Deutsch: Beyond Reward and Punishment • Tom Griffiths: The Artificial Use of Human Beings • Anca Dragan: Putting the Human into the AI Equation • Chris Anderson: Gradient Descent • David Kaiser: "Information" for Wiener, for Shannon, and for Us • Neil Gershenfeld: Scaling • W. Daniel Hillis: The First Machine Intelligences • Venki Ramakrishnan: Will Computers Become Our Overlords?


61. Alison Gopnik (Developmental Psychologist) – Artificial Intelligence/Natural Stupidity - Panoply

#artificialintelligence

Alison Gopnik is an internationally recognized expert in children's learning and development. Her new book The Gardener and the Carpenter is a response to the fact that "parenting" has become a verb, a powerful middle-class trend, a lucrative self-help industry, and sometimes a kind of bloodsport. Meanwhile, developmental science paints a very different picture of how children grow and learn, and what it means to be a good parent. As Gopnik puts it, "It's easy to say 'just chill,' but the advice is, basically, just chill!" On this week's episode of Think Again – a Big Think Podcast, Alison Gopnik and host Jason Gots discuss play, artificial intelligence, and the trouble with "parenting" as a verb.